The philosophy of science is concerned with the assumptions, foundations, methods and implications of science. In addition to these central problems for science as a whole, many philosophers of science consider these problems as they apply to particular sciences (e.g. philosophy of biology or philosophy of physics). Some philosophers of science also use contemporary results in science to draw philosophical morals.
Although most practitioners are philosophers, several prominent scientists have contributed to the field and still do. Other prominent scientists have felt that the practical effect on their work is limited: “Philosophy of science is about as useful to scientists as ornithology is to birds,” according to physicist Richard Feynman.
Philosophy of science focuses on metaphysical, epistemic and semantic aspects of science. Ethical issues such as bioethics and scientific misconduct are usually considered ethics or science studies rather than philosophy of science.
Karl Popper contended that the central question in the philosophy of science was distinguishing science from non-science.[1]
Early attempts by the logical positivists grounded science in observation, while non-science was non-observational and hence meaningless.[2] Popper argued that the central feature of science was that science aims at falsifiable claims (i.e. claims that can be proven false, at least in principle).[3]
No single unified account of the difference between science and non-science has been widely accepted by philosophers, and some regard the problem as unsolvable or uninteresting.[4]
This problem has taken center stage in the debate over evolution and creationism. Scientists argue that creationism does not meet the criteria of science and thus should not be treated on an equal footing with evolution.[5]
Two central questions about science are (1) what are the aims of science and (2) how should one interpret the results of science? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, a scientific antirealist or instrumentalist argues that science does not aim (or at least does not succeed) at truth and that we should not regard scientific theories as true.[6] Some antirealists claim that scientific theories aim at being instrumentally useful and should only be regarded as useful, but not true, descriptions of the world.[7] More radical antirealists, like Thomas Kuhn and Paul Feyerabend, have argued that scientific theories do not even succeed at this goal, and that later, more accurate scientific theories are not "typically approximately true" as Popper contended.[8][9]
Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of our current theories.[10][11][12][13][14] Antirealists point to the history of science,[15][16] epistemic morals,[7] the success of false modeling assumptions,[17] or broadly postmodern criticisms of objectivity as evidence against scientific realism.[18] Some antirealists attempt to explain the success of our theories without reference to truth,[7][19] while others deny that our current scientific theories are successful at all.[8][9]
In addition to providing predictions about future events, we often take scientific theories to offer explanations for those that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what gives a scientific theory explanatory power. One early and influential theory of scientific explanation was put forward by Carl G. Hempel and Paul Oppenheim in 1948. Their Deductive-Nomological (D-N) model of explanation says that a scientific explanation succeeds by subsuming a phenomenon under a general law.[20] Although largely ignored for a decade, this view was later subjected to substantial criticism, resulting in several widely accepted counterexamples.[21]
In addition to their D-N model, Hempel and Oppenheim offered statistical models of explanation intended to account for the statistical sciences.[20] These theories have received criticism as well.[21] Wesley Salmon attempted to address some of the problems with Hempel and Oppenheim's models by developing his statistical-relevance model.[22][23] Others have since suggested that explanation is primarily motivated by unifying disparate phenomena, or by providing the causal or mechanical histories leading up to the phenomenon (or phenomena of that type).[23]
Analysis is the activity of breaking an observation or theory down into simpler concepts in order to understand it. Analysis is as essential to science as it is to all rational enterprises. For example, the task of describing mathematically the motion of a projectile is made easier by separating out the force of gravity, angle of projection and initial velocity. After such analysis it is possible to formulate a suitable theory of motion.
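To make the decomposition concrete, here is a minimal sketch in Python (the launch speed, angle, and time are arbitrary illustrative values, and air resistance is ignored):

```python
import math

G = 9.81  # gravitational acceleration at Earth's surface (m/s^2)

def projectile_position(v0, angle_deg, t):
    """Position of a drag-free projectile at time t.

    The analysis separates the motion into independent parts:
    uniform horizontal velocity, and vertical motion under gravity.
    """
    theta = math.radians(angle_deg)
    vx = v0 * math.cos(theta)    # horizontal component of initial velocity
    vy = v0 * math.sin(theta)    # vertical component of initial velocity
    x = vx * t                   # no horizontal force: uniform motion
    y = vy * t - 0.5 * G * t**2  # constant downward acceleration
    return x, y

# Illustrative values: 20 m/s launch at 45 degrees, one second into flight.
print(projectile_position(20.0, 45.0, 1.0))
```

Each component can then be treated, and tested, on its own, which is what makes the subsequent theory of motion tractable.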
Reductionism in science can have several different senses. One type of reductionism is the belief that all fields of study are ultimately amenable to scientific explanation. A historical event, for example, might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics.
Daniel Dennett coined the term greedy reductionism to describe the assumption that such reductionism is possible. He claims that it is just 'bad science', seeking explanations that are appealing or eloquent rather than of use in predicting natural phenomena.
Arguments made against greedy reductionism through reference to emergent phenomena rely on the fact that self-referential systems can be said to contain more information than can be recovered by analyzing their component parts individually. Examples include systems that contain strange loops, fractal organization, and strange attractors in phase space. Analysis of such systems is necessarily information-destructive, because the observer must select a sample of the system that can at best be partially representative. Information theory can be used to calculate the magnitude of the information loss and is one of the techniques applied by chaos theory.
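As a toy illustration of the information-theoretic point (a sketch only, not a method drawn from any particular chaos-theoretic text), one can compare the Shannon entropy of a chaotic signal under fine and coarse observational binning; the difference quantifies the information destroyed when the observer samples the system more coarsely:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy, in bits, of a sequence of discrete symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# The logistic map in its chaotic regime, a standard toy dynamical system.
x, trajectory = 0.4, []
for _ in range(10000):
    x = 3.9 * x * (1 - x)
    trajectory.append(x)

fine = [int(v * 64) for v in trajectory]   # observer with 64 bins
coarse = [int(v * 4) for v in trajectory]  # observer with only 4 bins

loss = shannon_entropy(fine) - shannon_entropy(coarse)
print(f"information lost to coarse-graining: {loss:.2f} bits per sample")
```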
Science relies on evidence to validate its theories and models. The predictions implied by those theories and models should be in agreement with observation. Ultimately, observations reduce to those made by the unaided human senses: sight, hearing, etc. To be accepted by most scientists, several impartial, competent observers should agree on what is observed. Observations should be repeatable, e.g., experiments that generate relevant observations can be (and, if important, usually will be) done again. Furthermore, predictions should be specific; one should be able to describe a possible observation that would falsify the theory or a model that implies the prediction.
Nevertheless, while the basic concept of empirical verification is simple, in practice, there are difficulties as described in the following sections.
It is not possible for scientists to have tested every instance of an action and found a reaction. How is it, then, that they can assert, for example, that Newton's Third Law is universally true? They have, of course, tested many, many actions, and in each one have been able to find the corresponding reaction. But can we be sure that the next time we test the Third Law, it will be found to hold true?
One solution to this problem is to rely on the notion of induction. Inductive reasoning maintains that if a situation holds in all observed cases, then the situation holds in all cases. So, after completing a series of experiments that support the Third Law, one is justified in maintaining that the Law holds in all cases.
Explaining why induction commonly works has been somewhat problematic. One cannot use deduction, the usual process of moving logically from premise to conclusion, because there is simply no syllogism that will allow such a move. No matter how many times 17th-century biologists observed white swans, and in however many different locations, there is no deductive path that could lead them to the conclusion that all swans are white. This is just as well, since, as it turned out, that conclusion would have been wrong. Similarly, it is at least possible that an observation will be made tomorrow that shows an occasion in which an action is not accompanied by a reaction; the same is true of any scientific law.
One answer has been to conceive of a different form of rational argument, one that does not rely on deduction. Deduction allows one to formulate a specific truth from a general truth: all crows are black; this is a crow; therefore this is black. Induction somehow allows one to formulate a general truth from some series of specific observations: this is a crow and it is black; that is a crow and it is black; no crow has been seen that is not black; therefore all crows are black.
The problem of induction is one of considerable debate and importance in the philosophy of science: is induction indeed justified, and if so, how?
According to the Duhem–Quine thesis, named after Pierre Duhem and W. V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in our solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that there are only seven planets in our solar system. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong; the problem lies in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, and so on.
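The structure of the point can be sketched in code (the masses and positions below are hypothetical placeholders; only the shape of the calculation matters). The prediction compared against observation depends jointly on the law and on an auxiliary inventory of bodies, so a failed prediction indicts the conjunction rather than the law alone:

```python
G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def predicted_acceleration(bodies, position):
    """Net gravitational acceleration at `position` along one axis.

    Newton's law supplies only the form G*m/r^2; the auxiliary
    hypotheses supply `bodies`, the assumed masses and positions.
    """
    total = 0.0
    for mass, body_pos in bodies:
        r = body_pos - position
        total += (1 if r > 0 else -1) * G * mass / r**2
    return total

# Hypothetical inventories (mass in kg, one-dimensional position in m).
seven_planet_model = [(2e30, 0.0), (6e24, 1.5e11)]
eight_planet_model = seven_planet_model + [(1e26, 4.5e12)]  # add a "Neptune"

observer_at = 2.9e12  # a stand-in "Uranus" position
print(predicted_acceleration(seven_planet_model, observer_at))
print(predicted_acceleration(eight_planet_model, observer_at))  # differs
```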
One consequence of the Duhem-Quine thesis is that any theory can be made compatible with any empirical observation by the addition of suitable ad hoc hypotheses.
This thesis was accepted by Karl Popper, leading him to reject naïve falsification in favor of a 'survival of the fittest' among scientific theories, in which the most falsifiable theories that withstand testing are preferred. In Popper's view, any hypothesis that does not make testable predictions is simply not science. Such a hypothesis may be useful or valuable, but it cannot be said to be science. Confirmation holism, developed by W. V. Quine, holds that empirical data are not sufficient to decide between theories: a theory can always be made to fit the available empirical data. However, the fact that empirical evidence does not determine a choice between alternative theories does not imply that all theories are of equal value, as scientists often rely on guiding principles such as Ockham's razor.
One result of this view is that specialists in the philosophy of science stress the requirement that observations made for the purposes of science be restricted to intersubjective objects. That is, science is restricted to those areas where there is general agreement on the nature of the observations involved. It is comparatively easy to agree on observations of physical phenomena, harder to agree on observations of social or mental phenomena, and extremely difficult to reach agreement on matters of theology or ethics (which thus remain outside the normal purview of science).
When making observations, scientists peer through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see, e.g., the thermometer reads 37.9 °C. But if these scientists hold very different theories about what explains these basic observations, they may interpret them in very different ways. Ancient scientists interpreted the rising of the Sun in the morning as evidence that the Sun moved; later scientists deduced that the Earth rotates. While some scientists may conclude that certain observations confirm a specific hypothesis, skeptical co-workers may suspect that something is wrong with the test equipment, for example. Observations, when interpreted through a scientist's theories, are said to be theory-laden.
Observation involves both perception and cognition. That is, one does not make an observation passively, but is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations depend on our underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. More importantly, most scientific observation must be done within a theoretical context in order to be useful. For example, when one observes a measured increase in temperature, that observation is based on assumptions about the nature of temperature and its measurement, as well as assumptions about the way the instrument used to measure the temperature functions. Such assumptions are necessary in order to obtain scientifically useful observations (such as, "the temperature increased by two degrees").
Empirical observation is used to determine the acceptability of some hypothesis within a theory. When someone claims to have made an observation, it is reasonable to ask them to justify their claim. Such justification must include reference to the theory – operational definitions and hypotheses – in which the observation is embedded. That is, the observation is framed in terms of the theory that also contains the hypothesis it is meant to verify or falsify (though of course the observation should not be based on an assumption of the truth or falsity of the hypothesis being tested). This means that the observation cannot serve as an entirely neutral arbiter between competing hypotheses, but can only arbitrate between the hypotheses within the context of the underlying theory.
Thomas Kuhn denied that it is ever possible to isolate the hypothesis being tested from the influence of the theory in which the observations are grounded. He argued that observations always rely on a specific paradigm, and that it is not possible to evaluate competing paradigms independently. By "paradigm" he meant, essentially, a logically consistent "portrait" of the world, one that involves no logical contradictions and that is consistent with observations that are made from the point of view of this paradigm. More than one such logically consistent construct can paint a usable likeness of the world, but there is no common ground from which to pit two against each other, theory against theory. Neither is a standard by which the other can be judged. Instead, the question is which "portrait" is judged by some set of people to promise the most in terms of scientific "puzzle solving".
For Kuhn, the choice of paradigm was sustained by, but not ultimately determined by, logical processes. The individual's choice between paradigms involves setting two or more "portraits" against the world and deciding which likeness is most promising. In the case of a general acceptance of one paradigm or another, Kuhn believed that it represented the consensus of the community of scientists. Acceptance or rejection of some paradigm is, he argued, a social process as much as a logical process. Kuhn's position, however, is not one of relativism.[24] According to Kuhn, a paradigm shift will occur when a significant number of observational anomalies in the old paradigm have made the new paradigm more useful. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm. A new paradigm is chosen because it does a better job of solving scientific problems than the old one.
The fact that observation is embedded in theory does not mean observations are irrelevant to science. Scientific understanding derives from observation, but the acceptance of scientific statements is dependent on the related theoretical background or paradigm as well as on observation. Coherentism, skepticism, and foundationalism are alternatives for dealing with the difficulty of grounding scientific theories in something more than observations. And, of course, further, redesigned testing may resolve differences of opinion.
Induction attempts to justify scientific statements by reference to other specific scientific statements. It must avoid the problem of the criterion, in which any justification must in turn be justified, resulting in an infinite regress. The regress argument has been used to justify one way out of the infinite regress, foundationalism. Foundationalism claims that there are some basic statements that do not require justification. Both induction and falsification are forms of foundationalism in that they rely on basic statements that derive directly from immediate sensory experience.
The way in which basic statements are derived from observation complicates the problem. Observation is a cognitive act; that is, it relies on our existing understanding, our set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. At first sight, the observation does not appear to be 'basic'.
Coherentism offers an alternative by claiming that statements can be justified by their being part of a coherent system. In the case of science, the system is usually taken to be the complete set of beliefs of an individual scientist or, more broadly, of the community of scientists. W. V. Quine argued for a coherentist approach to science, as did E. O. Wilson, though Wilson used the term consilience (notably in his book of that name). An observation of a transit of Venus is justified by its coherence with our beliefs about optics, telescope mounts, and celestial mechanics. Where such an observation conflicts with one of these auxiliary beliefs, an adjustment somewhere in the system is required to remove the contradiction.
“William of Ockham (c. 1295–1349) … is remembered as an influential nominalist, but his popular fame as a great logician rests chiefly on the maxim known as Ockham's razor: Entia non sunt multiplicanda praeter necessitatem ["entities must not be multiplied beyond necessity"]. No doubt this represents correctly the general tendency of his philosophy, but it has not so far been found in any of his writings. His nearest pronouncement seems to be Numquam ponenda est pluralitas sine necessitate ["plurality must never be posited without necessity"], which occurs in his theological work on the Sentences of Peter Lombard (Super Quattuor Libros Sententiarum (ed. Lugd., 1495), i, dist. 27, qu. 2, K). In his Summa Totius Logicae, i. 12, Ockham cites the principle of economy, Frustra fit per plura quod potest fieri per pauciora ["it is futile to do with more things that which can be done with fewer"]. (Kneale and Kneale, 1962, p. 243)”
The practice of scientific inquiry typically involves a number of heuristic principles that serve as rules of thumb for guiding the work. Prominent among these are the principles of conceptual economy or theoretical parsimony that are customarily placed under the rubric of Ockham's razor, named after the 14th century Franciscan friar William of Ockham who is credited with giving the maxim many pithy expressions, not all of which have yet been found among his extant works.[25]
The motto is most commonly cited in the form "entities should not be multiplied beyond necessity", generally taken to suggest that the simplest explanation tends to be the correct one. As interpreted in contemporary scientific practice, it advises opting for the simplest theory among a set of competing theories with comparable explanatory power, discarding assumptions that do not improve the explanation. The "other things being equal" clause is a critical qualification, and it rather severely limits the utility of Ockham's razor in real practice, as theorists rarely if ever find themselves presented with competing theories of exactly equal explanatory adequacy.
Among the many difficulties that arise in trying to apply Ockham's razor is the problem of formalizing and quantifying the "measure of simplicity" that is implied by the task of deciding which of several theories is the simplest. Although various measures of simplicity have been brought forward as potential candidates from time to time, it is generally recognized that there is no such thing as a theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Moreover, it is extremely difficult to identify the hypotheses or theories that have "comparable explanatory power", though it may be readily possible to rule out some of the extremes. Ockham's razor also does not say that the simplest account is to be preferred regardless of its capacity to explain outliers, exceptions, or other phenomena in question. The principle of falsifiability requires that any exception that can be reliably reproduced should invalidate the simplest theory, and that the next-simplest account which can actually incorporate the exception as part of the theory should then be preferred to the first. As Albert Einstein puts it, "The supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience".
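One crude and admittedly theory-dependent way to operationalize the razor, offered here only as a sketch, is to count a model's free parameters and prefer the fewest among rivals whose fit errors are "comparable" (the tolerance defining "comparable" is itself an arbitrary stipulation, which illustrates the very difficulty just described):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.05, x.size)  # roughly linear data

def fit_error(degree):
    """Root-mean-square residual of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2)))

errors = {d: fit_error(d) for d in range(1, 6)}
smallest = min(errors.values())

# "Comparable explanatory power" is stipulated, arbitrarily, as a fit
# error within 10% of the best; among those, take the fewest parameters.
chosen = min(d for d, e in errors.items() if e <= 1.10 * smallest)
print(errors)
print("preferred (simplest adequate) degree:", chosen)
```

A different error measure or tolerance can reverse the verdict, which is precisely the theory-dependence of "simplicity" noted above.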
It is vitally important for science that information about the surrounding world and the objects of study be as accurate and as reliable as possible. To that end, the measurements that supply this information must be as objective as possible. Before the invention of measuring tools (weights, meter sticks, clocks, etc.), the only sources of information available to humans were their senses (vision, hearing, taste, touch, the sense of heat, the sense of gravity, etc.). Because human senses differ from person to person (owing to wide variations in personal chemistry, deficiencies, inherited flaws, etc.), there were no objective measurements before the invention of these tools, and consequently no rigorous science.
With the advent of the exchange of goods, trade, and agriculture there arose a need for such measurements, and sciences (arithmetic, geometry, mechanics, etc.) based on standardized units of measurement (stadia, pounds, seconds, etc.) were born. To abstract further from unreliable human senses and make measurements more objective, science uses measuring devices (spectrometers, voltmeters, interferometers, thermocouples, counters, etc.) and, more recently, computers. In most cases, the less human involvement in the measuring process, the more accurate and reliable the scientific data. Currently most measurements are made by mechanical and electronic sensors linked directly to computers, which further reduces the chance of human error or contamination of the information, and astonishing accuracies have become possible. For example, the current relative accuracy of measurements of mass is about 10⁻¹⁰, of angles about 10⁻⁹, and of time and length intervals in many cases on the order of 10⁻¹³ to 10⁻¹⁵. This has made it possible to measure the distance to the Moon with sub-centimeter accuracy (see Lunar laser ranging experiment), to track the slight movement of tectonic plates using the GPS system with sub-millimeter accuracy, and even to detect variations as small as 10⁻¹⁸ m, three orders of magnitude less than the size of a single atomic nucleus, in the distance between two mirrors separated by several kilometers (see LIGO).
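As a rough check on these orders of magnitude (assuming, as baselines, the commonly quoted mean Earth–Moon distance of about 3.84 × 10⁸ m and a mirror separation of about 4 km, the LIGO arm length):

```latex
% Relative precision implied by the figures above (orders of magnitude only)
\frac{\Delta L}{L}\bigg|_{\text{lunar ranging}}
  \approx \frac{10^{-2}\,\mathrm{m}}{3.84 \times 10^{8}\,\mathrm{m}}
  \approx 3 \times 10^{-11},
\qquad
\frac{\Delta L}{L}\bigg|_{\text{LIGO}}
  \approx \frac{10^{-18}\,\mathrm{m}}{4 \times 10^{3}\,\mathrm{m}}
  \approx 2.5 \times 10^{-22}.
```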
Another question about the objectivity of observations concerns the so-called "experimenter's regress", as well as other problems identified in the sociology of scientific knowledge: the people who carry out observations or experiments always have cognitive and social biases that lead them, often unconsciously, to introduce their own interpretations and theories into their descriptions of what they are 'seeing'. Some of these arguments can be shown to be of limited scope when analysed from a game-theoretic point of view.
In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied with investigating philosophical or foundational problems in particular sciences. The late 20th and early 21st centuries have seen a rise in the number of practitioners of the philosophy of particular sciences.
Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science then began paying increasing attention to developments in biology, from the rise of neo-Darwinism in the 1930s and 1940s, to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953, to more recent advances in genetic engineering. Other key ideas, such as the reduction of all life processes to biochemical reactions and the incorporation of psychology into a broader neuroscience, are also addressed. In the late 1990s it became increasingly clear that a new philosophy of biology was emerging, one that investigates communication processes within and between cells, tissues, organs, and even organisms of various kingdoms according to non-mechanistic and non-reductive methods such as biosemiotics or the biocommunicative approach.[26]
Philosophy of chemistry considers the methodology and underlying assumptions of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams.
The philosophy of science has centered on physics for the last several centuries, and during the last century in particular, it has become increasingly concerned with the ultimate constituents of existence, or what one might call reductionism. Thus, for example, considerable attention has been devoted to the philosophical implications of special relativity, general relativity, and quantum mechanics. In recent years, however, more attention has been given to both the philosophy of biology and chemistry, which both deal with more intermediate states of existence.
In the philosophy of chemistry, for example, we might ask, given quantum reality at the microcosmic level, and given the enormous distances between electrons and the atomic nucleus, how is it that we are unable to put our hands through walls, as physics might predict? Chemistry provides the answer, and so we then ask what it is that distinguishes chemistry from physics.
In the philosophy of biology, which is closely related to chemistry, we inquire about what distinguishes a living thing from a non-living thing at the most elementary level. Can a living thing be understood in purely mechanistic terms, or is there, as vitalism asserts, always something beyond mere quantum states?
Issues in the philosophy of chemistry may not be as deeply conceptually perplexing as the quantum mechanical measurement problem in the philosophy of physics, and may not be as conceptually complex as optimality arguments in evolutionary biology. However, interest in the philosophy of chemistry stems in part from chemistry's ability to connect the “hard sciences”, such as physics, with the “soft sciences”, such as biology, which gives it a rather distinctive role as the central science.
Philosophy of mathematics is the branch of philosophy that studies the philosophical assumptions, foundations, and implications of mathematics.
Recurrent themes include the ontological status of mathematical entities and the nature of mathematical truth and proof.
Philosophy of physics is the study of the fundamental philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism, as well as the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time).
The French philosopher, Auguste Comte (1798–1857), established the epistemological perspective of positivism in The Course in Positivist Philosophy, a series of texts published between 1830 and 1842. These texts were followed by the 1844 work, A General View of Positivism (published in English in 1865). The first three volumes of the Course dealt chiefly with the physical sciences already in existence (mathematics, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". Observing the circular dependence of theory and observation in science, and classifying the sciences in this way, Comte may be regarded as the first philosopher of science in the modern sense of the term.[27] For him, the physical sciences had necessarily to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive.[28]
Comte's positivism laid the initial philosophical foundations for formal sociology and social research, though Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. In the early 20th century, logical positivism, a stricter descendant of Comte's basic thesis but a broadly independent movement, sprang up in Vienna and grew to become one of the dominant movements in Anglo-American philosophy and the analytic tradition. Logical positivists (or 'neopositivists') rejected metaphysical assertions and attempted to reduce statements and propositions to pure logic.
The positivist perspective, however, has been associated with 'scientism': the view that the methods of the natural sciences may be applied to all areas of investigation, be they philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since fallen out of favor; today, practitioners of both the social and the physical sciences recognize the distorting effects of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and by new philosophical movements such as critical realism and neopragmatism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology.[29] The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality, arguing that it turns scientific thinking into something akin to ideology itself.[30]
Philosophy of economics is the branch of philosophy which studies philosophical issues relating to economics. It can also be defined as the branch of economics which studies its own foundations and morality.
Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these are epistemological concerns about the methodology of psychological investigation.
Other issues in the philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science or the philosophy of mind.
Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, evolutionary psychology, and artificial intelligence, questioning what they can and cannot explain in psychology.
Philosophy of psychology is a relatively young field, due to the fact that psychology only became a discipline of its own in the late 1800s. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism.
Neurophilosophy has also become a field in its own right, through the work of Paul and Patricia Churchland.
A very broad issue affecting the neutrality of science concerns which areas science chooses to explore, that is, what parts of the world and of humanity are studied by science. Since the areas open to scientific investigation are theoretically infinite, the question arises of what science should attempt to question or find out.
Philip Kitcher, in his "Science, Truth, and Democracy",[31] argues that scientific studies that attempt to show one segment of the population as being less intelligent, successful, or emotionally developed than others have a political feedback effect which further excludes such groups from access to science. Such studies thus undermine the broad consensus required for good science by excluding certain people, and so prove themselves, in the end, to be unscientific.
Paul Feyerabend argued that no description of scientific method could possibly be broad enough to encompass all the approaches and methods used by scientists. Feyerabend objected to prescriptive scientific method on the grounds that any such method would stifle and cramp scientific progress. Feyerabend claimed that "the only principle that does not inhibit progress is: anything goes."[32] However, there have been many opponents of his view; Alan Sokal and Jean Bricmont, for example, wrote the essay "Feyerabend: Anything Goes" about his belief that science is of little use to society.
In his book The Structure of Scientific Revolutions Kuhn argues that the process of observation and evaluation take place within a paradigm. 'A paradigm is what the members of a community of scientists share, and, conversely, a scientific community consists of men who share a paradigm'.[33] On this account, science can be done only as a part of a community, and is inherently a communal activity.
For Kuhn, the fundamental difference between science and other disciplines is in the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. It is apparent that social factors play an important and direct role in scientific method, but that they do not serve to differentiate science from other disciplines. Furthermore, although on this account science is socially constructed, it does not follow that reality is a social construct. (See Science studies and the links there.) Kuhn’s ideas are equally applicable to both realist and anti-realist ontologies.
There are, however, those who maintain that scientific reality is indeed a social construct, to quote Quine:
Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer . . . For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.[34]
A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists, including Michel Callon, Bruno Latour, John Law, Anselm Strauss, Lucy Suchman, and others. Some of their work has previously been loosely gathered under actor-network theory. Here the approach to the philosophy of science is to study how scientific communities actually operate.
In the Continental philosophical tradition, science is viewed from a world-historical perspective. One of the first philosophers who supported this view was Georg Wilhelm Friedrich Hegel. Philosophers such as Ernst Mach, Pierre Duhem, and Gaston Bachelard also wrote their works with this world-historical approach to science. Nietzsche advanced the thesis, in his "The Genealogy of Morals", that the motive for the search for truth in the sciences is a kind of ascetic ideal.
All of these approaches involve a historical and sociological turn to science, with a special emphasis on lived experience (a kind of Husserlian "life-world"), rather than the progress-based or anti-historical approach taken in the analytic tradition. Two other approaches to science include Edmund Husserl's phenomenology and Martin Heidegger's hermeneutics.
The largest effect on the Continental tradition with respect to science was Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason one could suggest that the philosophy of science in the Continental tradition has not developed much further, owing to an inability to overcome Heidegger's criticism.
Notwithstanding this, there have been a number of important works, especially those of a Kuhnian precursor, Alexandre Koyré. Another important development was Foucault's analysis of historical and scientific thought in The Order of Things and his study of power and corruption within the "science" of madness.
Several post-Heideggerian authors contributing to the Continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., "Truth and Justification", 1998), Carl Friedrich von Weizsäcker ("The Unity of Nature", 1980), and Wolfgang Stegmüller ("Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie", 1973–1986).